Add a FailoverChannel wrapper on top of IsolationChannel to maintain a set of primary and failover channels. #37840
parveensania wants to merge 8 commits into apache:master
Conversation
Summary of Changes (Gemini Code Assist): This pull request improves the robustness of gRPC communication within Dataflow workers by implementing a failover mechanism for channels. It allows the system to gracefully handle primary channel connectivity issues by switching to a fallback channel and periodically attempting to restore the primary connection, enhancing the stability and reliability of the worker's interaction with the Windmill service.
R: @scwhittle
Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment
assign set of reviewers
Assigning reviewers: R: @Abacn added as fallback since no labels match configuration. Note: If you would like to opt out of this review, comment. Available commands:
The PR bot will only process comments in the main thread (not review comments).
stop reviewer notifications
Stopping reviewer notifications for this pull request: requested by reviewer. If you'd like to restart, comment
When looking for similar implementations, I came across GcpMultiEndpointChannel: https://github.com/GoogleCloudPlatform/grpc-gcp-java/blob/master/grpc-gcp/src/main/java/com/google/cloud/grpc/GcpMultiEndpointChannel.java

GcpMultiEndpointChannel uses the channel's ConnectivityState to determine which channel to use. Would it be more robust if FailoverChannel used ConnectivityState instead of RPC status to fail over? Thinking something like: wait X seconds for the primary to become ready the first time, and fail over to the fallback channel if it takes longer than that. We can let the primary retry connections in the background and switch back to it whenever it becomes ready.
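As a rough sketch of the connectivity-driven policy suggested above: fall back only after the primary has been not-READY for a timeout, and switch back the moment it reports READY again. The class, method names, and the nested `ConnectivityState` enum are hypothetical stand-ins (the real enum is `io.grpc.ConnectivityState`, fed via `ManagedChannel.notifyWhenStateChanged`); this is not the PR's actual implementation.

```java
class ConnectivitySelector {
  // Local stand-in for io.grpc.ConnectivityState.
  enum ConnectivityState { IDLE, CONNECTING, READY, TRANSIENT_FAILURE, SHUTDOWN }

  private final long primaryReadyTimeoutNanos;
  private long primaryNotReadySinceNanos = -1; // -1 => primary currently READY

  ConnectivitySelector(long primaryReadyTimeoutNanos) {
    this.primaryReadyTimeoutNanos = primaryReadyTimeoutNanos;
  }

  // Would be driven from the primary channel's notifyWhenStateChanged callback.
  synchronized void onPrimaryStateChange(ConnectivityState state, long nowNanos) {
    if (state == ConnectivityState.READY) {
      primaryNotReadySinceNanos = -1; // switch back to primary immediately
    } else if (primaryNotReadySinceNanos < 0) {
      primaryNotReadySinceNanos = nowNanos; // start the not-READY clock
    }
  }

  // True once the primary has been not-READY for longer than the timeout.
  synchronized boolean useFallback(long nowNanos) {
    return primaryNotReadySinceNanos >= 0
        && nowNanos - primaryNotReadySinceNanos > primaryReadyTimeoutNanos;
  }
}
```

The background-retry behavior comes for free: gRPC keeps reconnecting the primary channel, and the first READY callback flips selection back.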
I went for a hybrid approach: check both connection state and RPC status. Connection-state errors could be transient, so we move back to the primary as soon as the state changes to READY. RPC status can capture server-side issues too, like the backend not responding (for instance, requests getting rejected by security policies; there could be other reasons too). For this I've used a longer cooling period before we retry the primary. WDYT?
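A minimal sketch of that hybrid policy, assuming hypothetical callback names (`onPrimaryConnectivity`, `onPrimaryRpcFailure`): a not-READY state blocks the primary only while it lasts, while an RPC-level failure imposes a longer cooldown before the primary is retried. This is an illustration of the idea, not the PR's code.

```java
class HybridFailoverPolicy {
  private final long rpcFailureCooldownNanos;
  private boolean primaryReady = true;
  private boolean sawRpcFailure = false;
  private long lastRpcFailureNanos;

  HybridFailoverPolicy(long rpcFailureCooldownNanos) {
    this.rpcFailureCooldownNanos = rpcFailureCooldownNanos;
  }

  // Fed from the channel's connectivity-state callback.
  synchronized void onPrimaryConnectivity(boolean ready) {
    primaryReady = ready;
  }

  // Fed from RPC completion status on the primary channel.
  synchronized void onPrimaryRpcFailure(long nowNanos) {
    sawRpcFailure = true;
    lastRpcFailureNanos = nowNanos;
  }

  // Primary is eligible again once READY and past the RPC-failure cooldown.
  synchronized boolean usePrimary(long nowNanos) {
    return primaryReady
        && (!sawRpcFailure
            || nowNanos - lastRpcFailureNanos >= rpcFailureCooldownNanos);
  }
}
```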
            currentFlowControlSettings),
        currentFlowControlSettings.getOnReadyThresholdBytes());
    ManagedChannel primaryChannel =
        IsolationChannel.create(
Since it's being set up this way, IsolationChannel's connectivity callbacks are what will be used. I'm not sure how that will work, since it internally has multiple channels. Looking at it, it seems to just have the default ManagedChannel implementation, which throws an unimplemented exception.
What about having IsolationChannel on top of the fallback channels? That seems simpler to me, since IsolationChannel just internally creates the separate channels and otherwise doesn't do much beyond forwarding things on.
It would be good to have a unit test of whatever setup we do use, so that we flush out the issues there instead of requiring an integration test.
Addressed this. IsolationChannel now wraps FailoverChannel, which creates two channels per active RPC.
The original intent was to keep IsolationChannel unmodified (since it is used by the dispatcher client) and handle fallback at the per-worker level. The new ordering (IsolationChannel over FailoverChannel) changes the semantics to per-RPC failover, which means that in case of connectivity issues each RPC would independently discover the failure and switch at different times, rather than switching together in a coordinated way.
I do agree that managing state at the per-RPC level seems less error-prone, but I'd like to call out this semantic change.
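The semantic difference being called out can be shown with a toy model (all names here are illustrative, not from the PR): with one shared failover state, a single observed failure moves every RPC to the fallback together; with per-RPC state, only the RPC that saw the failure switches, and the others discover it independently.

```java
class FailoverScope {
  // Toy stand-in for the failover decision held by a FailoverChannel.
  static class FailoverState {
    boolean onFallback = false;
    void noteFailure() { onFallback = true; }
  }

  // Coordinated: one state shared by all RPCs; one failure switches everyone.
  static boolean[] sharedScope(int rpcs) {
    FailoverState shared = new FailoverState();
    shared.noteFailure(); // a single RPC observes a failure
    boolean[] onFallback = new boolean[rpcs];
    for (int i = 0; i < rpcs; i++) onFallback[i] = shared.onFallback;
    return onFallback;
  }

  // Per-RPC (IsolationChannel over FailoverChannel): only the RPC that saw
  // the failure switches; the others keep using the primary for now.
  static boolean[] perRpcScope(int rpcs, int failingRpc) {
    boolean[] onFallback = new boolean[rpcs];
    for (int i = 0; i < rpcs; i++) {
      FailoverState own = new FailoverState();
      if (i == failingRpc) own.noteFailure();
      onFallback[i] = own.onFallback;
    }
    return onFallback;
  }
}
```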
scwhittle left a comment:
Just a couple more comments, thanks!
 * When toFallback is false (primary recovered) it clears all fallback flags and returns true if
 * recovery actually changed state, so the caller can log it.
 */
synchronized boolean transitionFallback(boolean toFallback, long nowNanos) {
nit: how about two separate methods? The bool is just forking internally. I think you could then name them to reflect where they are called:
markPrimaryReady()
notePrimaryRpcFailure()
Oops, dropped part of that comment. I think we may also want to only transition to fallback after X continuous seconds of RPC failures without any responses. If the method is notePrimaryRpcStatus(bool success), you can keep track of the time and then only fall back if the most recent failure is X seconds past the first continuously observed failure.
Since we expect requiring fallback to be very rare, it seems like we should be cautious about enabling it.
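The suggestion above could be sketched roughly like this: track the first and most recent failure of the current streak, reset on any success, and fall back only once the streak spans the whole window. Class and accessor names besides `notePrimaryRpcStatus` are hypothetical; this is one possible reading of the comment, not the PR's code.

```java
class PrimaryHealthTracker {
  private final long continuousFailureWindowNanos;
  private long firstFailureNanos = -1; // -1 => no ongoing failure streak
  private long lastFailureNanos = -1;

  PrimaryHealthTracker(long continuousFailureWindowNanos) {
    this.continuousFailureWindowNanos = continuousFailureWindowNanos;
  }

  // Record the outcome of an RPC on the primary channel.
  synchronized void notePrimaryRpcStatus(boolean success, long nowNanos) {
    if (success) {
      firstFailureNanos = -1; // any success resets the streak
      lastFailureNanos = -1;
    } else {
      if (firstFailureNanos < 0) {
        firstFailureNanos = nowNanos; // start of a new failure streak
      }
      lastFailureNanos = nowNanos;
    }
  }

  // Fall back only once the most recent failure is a full window past the
  // first failure of the current streak, with no successes in between.
  synchronized boolean shouldUseFallback() {
    return firstFailureNanos >= 0
        && lastFailureNanos - firstFailureNanos >= continuousFailureWindowNanos;
  }
}
```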
        serviceAddress,
        workerOptions.getWindmillServiceRpcChannelAliveTimeoutSec(),
        currentFlowControlSettings),
    remoteChannel(
Maybe the fallback parameter should be a Supplier that FailoverChannel will call at most once, but which could internally defer the call until we actually want to fail over (can wrap the provided supplier with Suppliers.memoize). Then we could avoid creating a channel to the dispatcher if we never fall back.
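The lazy-fallback idea can be sketched with a memoizing supplier: the delegate runs at most once, on first `get()`, so the dispatcher channel is only created if failover actually happens. Guava's `Suppliers.memoize` provides exactly this; the stand-in below avoids the dependency, and `LazyFallback` is a hypothetical name.

```java
class LazyFallback {
  // Minimal memoizing supplier, same spirit as Guava's Suppliers.memoize:
  // the delegate is invoked at most once, on the first get().
  static <T> java.util.function.Supplier<T> memoize(
      java.util.function.Supplier<T> delegate) {
    return new java.util.function.Supplier<T>() {
      private T value; // guarded by this

      @Override
      public synchronized T get() {
        if (value == null) {
          value = delegate.get(); // created lazily, only on first use
        }
        return value;
      }
    };
  }
}
```

FailoverChannel would then hold the memoized supplier and call `get()` only on its first failover, leaving the dispatcher channel uncreated on the happy path.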
Adds a FailoverChannel wrapper class on top of IsolationChannels to maintain a primary channel and a failover channel, and to fall back to the failover channel if connectivity over the primary channel cannot be established. The primary channel will be retried again after a period.